
    The nucleus under the microscope. A biophysical approach.

    Get PDF
    At the beginning of the 1950s, many researchers challenged the assumption that the fundamental Abbe limit could not be overcome. An attempt was made by Giuliano Toraldo di Francia, who showed that the width of the point-spread function can be reduced by applying a filtering technique called apodization (1). In 1994, a revolutionary event took place in the field of optical microscopy: Hell described a method for circumventing the light diffraction barrier (2). In this way, details that were not visible with diffraction-limited techniques could be imaged using fluorescence microscopy. Nowadays these methods, termed far-field fluorescence nanoscopy techniques, have become an indispensable tool for scientists addressing important biological and biophysical questions at the single-molecule level. To highlight the outstanding importance of such techniques, the Royal Swedish Academy of Sciences awarded Eric Betzig, Stefan W. Hell, and William E. Moerner the Nobel Prize in Chemistry 2014 "for the development of super-resolved fluorescence microscopy". In addition, several important technical improvements, including confocal laser scanning microscopy (CLSM) (3), multiphoton microscopy, 4Pi (4) and I5M (5), have played an important role in the field of optical microscopy. In parallel, in 2015 Boyden and colleagues developed a new method termed Expansion Microscopy (ExM), which uniformly expands biological samples by increasing the relative distances among the fluorescent molecules labelling specific cellular components (6). ExM achieves a lateral resolution of about 65 nm using a conventional diffraction-limited microscope. However, all super-resolution methods demand particular attention to sample preparation. Achieving super-resolved images requires the optimization of every step of the labelling process, from the expression of fluorescent proteins to the fixation of the biological samples. In recent years, these labelling strategies have acquired a critical role in the field of fluorescence microscopy. In particular, the design and the localization precision of specific affinity probes are crucial features that can restrict the applicability of these techniques. In this work, several labelling approaches and the optimization of different staining protocols for super-resolution techniques were addressed. My effort was focused on STED nanoscopy and ExM, and on how to optimize the labelling protocol and the choice of fluorophores for a high labelling density. The optimization of the steps involved in the labelling process allowed me to combine ExM with STED nanoscopy (ExSTED) to enhance the final resolution (7). In addition, these techniques were used to decipher molecular assemblies in the cell nucleus. In particular, my attention was focused on an important layer termed the nuclear envelope (NE) (8). This nuclear region encases the genetic material, maintains the regular shape of the nucleus and regulates gene expression. The NE is composed of two lipid bilayers and different classes of proteins, which pass through or are tightly linked to the nuclear membranes. Nuclear pore complexes (NPCs) and nuclear lamins, two classes of proteins belonging to the NE, were investigated in this work. In particular, NPCs were used to evaluate the isotropy of the expansion and to calculate the expansion factor (EF) at the nanoscale in ExM. In this work, we show that Nup153, a filamentous subunit localized in the nuclear pore basket (9), is a good reporter for verifying the isotropy of the expansion process and for its quantification. In addition, nuclear lamins, in particular lamin A (LA) and its mutant ΔLA50 (10), were used to investigate physiological and pathological nuclear membrane invaginations in normal and aging cells.
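The ~65 nm figure quoted above follows from simple arithmetic: the effective resolution of ExM on a conventional microscope is roughly the diffraction-limited resolution divided by the expansion factor, which can itself be estimated from pre- and post-expansion distances between landmarks such as Nup153. A minimal sketch with illustrative, assumed numbers (not measurements from this work):

```python
# Minimal sketch of the ExM resolution arithmetic; all values are
# illustrative assumptions, not measurements from this work.
d_pre = 40.0    # assumed pre-expansion distance between two landmarks (nm)
d_post = 160.0  # assumed post-expansion distance between the same landmarks (nm)

ef = d_post / d_pre          # expansion factor (EF), here 4x
diffraction_limit = 260.0    # assumed lateral resolution of a conventional microscope (nm)
effective_resolution = diffraction_limit / ef

print(f"EF = {ef:.1f}x")                                        # 4.0x
print(f"effective resolution ~ {effective_resolution:.0f} nm")  # ~65 nm
```

With a ~4x gel and a ~260 nm diffraction limit, this reproduces the ~65 nm lateral resolution quoted for conventional-microscope ExM.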

    A minimalist model for the simulation of the structure and dynamics of disordered proteins

    Get PDF
    Proteins are one of the two fundamental classes of biomolecules (together with nucleic acids), covering essentially all the functional roles in living systems. In the last fifty years, a great effort was devoted to the computer simulation of the dynamics of these systems, in order to gain better insight into their structure and behavior and to complement experimental studies. However, the system size, the time scale reachable in simulations and the accuracy of the representation are limited by the available computer power. Although Moore's law has ensured, up to now, its exponential increase over time, simulations with atomic accuracy can currently address a virus-sized system only on very brief time scales, or a very limited portion of the cell, while only single proteins can be represented on macroscopic time scales. Therefore, in order to study dynamical biological systems, coarse-grained models are considered a natural solution to overcome the limits of atomistic models. These models represent the system at a lower resolution, reducing the number of explicit degrees of freedom and providing a lighter and accelerated dynamics of the system. Depending on the level of coarse graining, macroscopic time scales for systems of biologically interesting size can currently be afforded. The hierarchical organization of protein structure naturally suggests a possible level of coarse graining, namely one interacting center (also called "bead") per amino acid, the latter being the basic chemical unit of a protein. Among one-bead models, the subclass of those with the bead placed on the Cα emerges as the one that best represents the conformation of the backbone and the secondary structures. The class of Cα one-bead models, also called "minimalist", is the focus of this Thesis work. In the last decade a number of minimalist models were developed, all representing the interactions by means of empirical force fields (FF) consisting of a sum of analytical or numerical terms. The models differ in the number and composition of the FF terms (more or less physics- or chemistry-based) and in the parameterization strategy, which can be based on higher-level theories (typically atomistic simulations) or on experimental data (e.g. data sets of experimental structures, or the inclusion of other kinds of macroscopic and thermodynamic information). As a consequence, a model can be more or less general and transferable. Usually, accuracy and transferability are in conflict: the more bias towards a given structure is included, the more accurately that structure will be represented, but the less transferable and predictive the model becomes. In order to overcome this problem, most of the currently available minimalist models include some a priori knowledge of the secondary or tertiary structures within the parameterization, which can be called a "partial bias". Clearly, the general goal is to build a model that is both accurate and predictive, and therefore unbiased. This Thesis aims to take some steps along this route. The chosen strategy is to follow a physics-based approach, related to the fundamental nature of the forces acting within proteins. Basically, the primary structure of a protein (i.e. the sequence of the polypeptide chain) is stabilized by covalent chemical bonds, while the secondary structure (e.g. helical or sheet-like structures) is stabilized by specific hydrogen bonds. Higher-level structures (tertiary and quaternary) are stabilized by other specific interactions, such as disulphide and salt bridges. The specific aim of this work is to build a general minimalist model to be used for unstructured proteins. Therefore, no hydrogen bonding or other specific interactions are included in the FF, and the model is parameterized on a data set of unstructured proteins. This strategy has resulted in a quite general and unbiased model able to reproduce the structure and dynamics of a class of proteins, namely the "intrinsically disordered proteins" (IDPs), which is very interesting per se. In addition, it can be considered a zero-point approximation onto which hydrogen bonding and other interactions can be added in order to build models for structured proteins in a rational and physically based fashion. Besides the primary results (i.e. the model for IDPs, the description of the dynamics of some specific cases, etc.), this work has yielded interesting insight into the whole class of IDPs. First, due to the absence of a stable conformation, structural data on these proteins are very elusive. Therefore, even building the data set was a particularly hard task, which can be considered a side result of this work and led to a deep reconsideration of the definition of secondary structure. Second, these proteins elude one of the paradigms of biomolecular chemistry, namely the relation between structure and function: they do not have a well-defined structure, but they do have a function. Therefore, a reliable model for this class can help redefine this paradigm, including dynamical information within it. Finally, for the same reason, this model can shed light on the behavior and function of the unstructured intermediates of the folding process of structured proteins.
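To make the "one bead per amino acid" idea concrete, here is a minimal sketch of a Cα bead-chain energy with only generic terms (harmonic pseudo-bonds plus a nonbonded pair term) and, deliberately, no hydrogen bonding. The functional forms and parameter values are illustrative assumptions, not the force field developed in the Thesis:

```python
# Hypothetical sketch of a one-bead-per-residue (C-alpha) energy function.
# Parameters are illustrative placeholders, not the thesis parameterization.
import numpy as np

K_BOND = 100.0         # kcal/mol/A^2, harmonic pseudo-bond stiffness (assumed)
R0 = 3.8               # A, typical Ca-Ca distance between consecutive residues
EPS, SIGMA = 0.2, 5.0  # Lennard-Jones parameters for nonbonded beads (assumed)

def energy(coords: np.ndarray) -> float:
    """Total energy of a chain of C-alpha beads, coords of shape (N, 3)."""
    e = 0.0
    # Pseudo-bonds between consecutive beads encode the primary structure.
    d = np.linalg.norm(np.diff(coords, axis=0), axis=1)
    e += 0.5 * K_BOND * np.sum((d - R0) ** 2)
    # Generic nonbonded term for beads separated by >2 positions along the
    # chain; no hydrogen-bond term, so no secondary structure is imposed.
    n = len(coords)
    for i in range(n):
        for j in range(i + 3, n):
            r = np.linalg.norm(coords[i] - coords[j])
            e += 4 * EPS * ((SIGMA / r) ** 12 - (SIGMA / r) ** 6)
    return e
```

In this spirit, secondary-structure-specific terms (e.g. hydrogen bonding) could later be added on top of such a generic backbone as separate FF terms, which is the rational route to structured proteins described above.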

    Subspace clustering in high-dimensions: Phase transitions & Statistical-to-Computational gap

    Full text link
    A simple model to study subspace clustering is the high-dimensional $k$-Gaussian mixture model where the cluster means are sparse vectors. Here we provide an exact asymptotic characterization of the statistically optimal reconstruction error in this model in the high-dimensional regime with extensive sparsity, i.e. when the fraction of non-zero components of the cluster means $\rho$, as well as the ratio $\alpha$ between the number of samples and the dimension, are fixed, while the dimension diverges. We identify the information-theoretic threshold below which obtaining a positive correlation with the true cluster means is statistically impossible. Additionally, we investigate the performance of the approximate message passing (AMP) algorithm, analyzed via its state evolution, which is conjectured to be optimal among polynomial algorithms for this task. We identify in particular the existence of a statistical-to-computational gap between algorithms, which require a signal-to-noise ratio $\lambda_{\text{alg}} \ge k/\sqrt{\alpha}$ to perform better than random, and the information-theoretic threshold at $\lambda_{\text{it}} \approx \sqrt{-k\rho\log\rho}/\sqrt{\alpha}$. Finally, we discuss the case of sub-extensive sparsity $\rho$ by comparing the performance of AMP with other sparsity-enhancing algorithms, such as sparse PCA and diagonal thresholding. Comment: NeurIPS camera-ready version.
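A small numerical illustration of the gap between the two thresholds quoted above, using the stated formulas with illustrative parameter values (a sketch, not code from the paper):

```python
# Statistical-to-computational gap: AMP threshold lambda_alg = k/sqrt(alpha)
# versus the information-theoretic threshold
# lambda_it ~ sqrt(-k*rho*log(rho))/sqrt(alpha). Parameters are illustrative.
import numpy as np

k, alpha = 2, 1.0
for rho in (1e-1, 1e-2, 1e-3):
    lam_alg = k / np.sqrt(alpha)
    lam_it = np.sqrt(-k * rho * np.log(rho)) / np.sqrt(alpha)
    print(f"rho={rho:.0e}: lambda_it={lam_it:.3f}, lambda_alg={lam_alg:.3f}, "
          f"ratio={lam_alg / lam_it:.1f}")
# As rho decreases the ratio grows: detection is statistically possible at
# signal strengths far below what AMP (conjecturally, any polynomial-time
# algorithm) requires.
```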

    Theory and applications of the Sum-Of-Squares technique

    Full text link
    The Sum-of-Squares (SOS) approximation method is a technique used in optimization problems to derive lower bounds on the optimal value of an objective function. By representing the objective function as a sum of squares in a feature space, the SOS method transforms non-convex global optimization problems into solvable semidefinite programs. This note presents an overview of the SOS method. We start with its application in finite-dimensional feature spaces and, subsequently, we extend it to infinite-dimensional feature spaces using kernels (k-SOS). Additionally, we highlight the utilization of SOS for estimating some relevant quantities in information theory, including the log-partition function. Comment: 19 pages, 4 figures.
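As an illustration of how the SOS relaxation turns global polynomial minimization into a semidefinite program, here is a minimal sketch for a univariate quartic; the example polynomial and the use of cvxpy are assumptions for the illustration, not taken from the note. (In one variable the bound is tight, since nonnegative univariate polynomials are exactly sums of squares.)

```python
# Minimal SOS lower-bound sketch: maximize c such that p(x) - c is a sum of
# squares, i.e. p(x) - c = z^T Q z with z = [1, x, x^2] and Q >= 0 (PSD).
# Solved as a semidefinite program with cvxpy (assumed installed).
import cvxpy as cp

# Hypothetical example polynomial p(x) = 5 + 2x - 3x^2 + x^4,
# stored as coefficients [p0, p1, p2, p3, p4] of x^0..x^4.
p = [5.0, 2.0, -3.0, 0.0, 1.0]

Q = cp.Variable((3, 3), symmetric=True)  # Gram matrix for z = [1, x, x^2]
c = cp.Variable()                        # candidate lower bound

constraints = [
    Q >> 0,                         # SOS condition: Q positive semidefinite
    Q[0, 0] == p[0] - c,            # match constant term of p(x) - c
    2 * Q[0, 1] == p[1],            # match x coefficient
    2 * Q[0, 2] + Q[1, 1] == p[2],  # match x^2 coefficient
    2 * Q[1, 2] == p[3],            # match x^3 coefficient
    Q[2, 2] == p[4],                # match x^4 coefficient
]

prob = cp.Problem(cp.Maximize(c), constraints)
prob.solve()
print(f"SOS lower bound on min p(x): {c.value:.4f}")  # ~0.153 for this p
```

The non-convex problem "minimize p(x) over all real x" has become a convex feasibility question about the Gram matrix Q, which is what makes the method tractable in higher dimensions as well.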

    COVID-19 Incidence and Vaccine Effectiveness in University Staff, 1 March 2020–2 April 2022

    Get PDF
    Background: University workers undergo intense social interactions due to frequent contact with students and colleagues and lectures held in crowded conditions. The aim of our study was to assess the incidence of COVID-19 infection and vaccine effectiveness in a cohort of workers of the University of Trieste from 1 March 2020 (start of the pandemic) through 2 April 2022. Methods: The University of Trieste implemented a number of public health policies to contain the spread of SARS-CoV-2 on campus, including prompt contact tracing, enhanced ventilation of all premises, fomite disinfection and the mandatory use of face masks indoors. In compliance with the surveillance protocol of the local public health department, university personnel were tested for SARS-CoV-2 by polymerase chain reaction (PCR) on a nasopharyngeal swab on demand, in the event of symptoms consistent with COVID-19, or for contact tracing following close contact with a confirmed COVID-19 case. The incidence rates of SARS-CoV-2 infection were estimated as the number of cases divided by the number of person-days (p-d) at risk. A multivariable Cox proportional hazards regression model was employed to investigate the risk of primary COVID-19 infection, adjusting for a number of potential confounders and expressing the risk as the adjusted hazard ratio (aHR) with a 95% confidence interval (95% CI). Results: The incidence of SARS-CoV-2 infection among the university staff was lower than that among healthcare workers (HCWs) of the same area. Compared to unvaccinated colleagues (6.55 per 10,000 p-d), the raw incidence of SARS-CoV-2 infection was higher among university workers immunized with one (7.22 per 10,000 p-d) or two (7.48 per 10,000 p-d) doses of the COVID-19 vaccine, decreasing in those receiving the booster (1.98 per 10,000 p-d). The risk of infection was increased only in postgraduate medical trainees (aHR = 2.16; 95% CI: 1.04–4.48), and this was limited to the Omicron transmission period. After the implementation of the national vaccination campaign against COVID-19, workers immunized with the booster were less likely than unvaccinated workers to be infected by SARS-CoV-2 both before (aHR = 0.10; 95% CI: 0.06–0.16) and during (aHR = 0.37; 95% CI: 0.27–0.52) the Omicron transmission period. The vaccine effectiveness of the booster was 90% (= (1 − 0.10) × 100) before versus 63% (= (1 − 0.37) × 100) during the Omicron wave, without a significant difference between homologous (three doses of mRNA vaccine) and heterologous (two doses of Vaxzevria followed by a third dose of mRNA vaccine) immunization. Conclusions: The incidence of SARS-CoV-2 infection in the university staff was lower than that among HCWs of ASUGI, likely because the testing-on-demand schedule inevitably missed asymptomatic infections. Therefore, the observed significantly protective effect of the booster dose in university personnel refers to symptomatic SARS-CoV-2 infections. The infection prevention and control policies implemented by the University of Trieste managed to equalize the biological risk between the administrative and teaching staff.
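The vaccine-effectiveness figures above follow directly from the reported adjusted hazard ratios via VE = (1 − aHR) × 100; a one-line sketch of that arithmetic:

```python
# Vaccine effectiveness from the adjusted hazard ratios reported above.
for period, ahr in [("pre-Omicron", 0.10), ("Omicron", 0.37)]:
    ve = (1 - ahr) * 100
    print(f"Booster VE, {period}: {ve:.0f}%")  # prints 90% and 63%
```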

    Learning Two-Layer Neural Networks, One (Giant) Step at a Time

    Full text link
    We study the training dynamics of shallow neural networks, investigating the conditions under which a limited number of large-batch gradient descent steps can facilitate feature learning beyond the kernel regime. We compare the influence of the batch size and that of multiple (but finitely many) steps. Our analysis of a single-step process reveals that while a batch size of $n = O(d)$ enables feature learning, it is only adequate for learning a single direction, i.e. a single-index model. In contrast, $n = O(d^2)$ is essential for learning multiple directions and specialization. Moreover, we demonstrate that "hard" directions, which lack the first $\ell$ Hermite coefficients, remain unobserved and require a batch size of $n = O(d^\ell)$ to be captured by gradient descent. Upon iterating a few steps, the scenario changes: a batch size of $n = O(d)$ is enough to learn new target directions spanning the subspace linearly connected in the Hermite basis to the previously learned directions, thereby exhibiting a staircase property. Our analysis utilizes a blend of techniques related to concentration, projection-based conditioning, and Gaussian equivalence that are of independent interest. By determining the conditions necessary for learning and specialization, our results highlight the interaction between the batch size and the number of iterations, and lead to a hierarchical depiction where learning performance exhibits a stairway to accuracy over time and batch size, shedding new light on feature learning in neural networks.
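A schematic numpy sketch of the kind of experiment described above: one "giant" full-batch gradient step on the first-layer weights of a two-layer ReLU network trained on a single-index target. All choices (squared loss, Gaussian data, tanh link, the learning-rate scale, the sizes) are assumptions for illustration, not the paper's exact setup:

```python
# One large-batch gradient step on the first layer of a two-layer network.
# Setting is an illustrative assumption: y = tanh(<w*, x>), squared loss,
# frozen second layer, batch of order d^2, large learning rate of order p.
import numpy as np

rng = np.random.default_rng(0)
d, p = 100, 50                  # input dimension, hidden width
n = 4 * d**2                    # "giant" batch of order d^2

w_star = np.zeros(d); w_star[0] = 1.0          # hidden target direction
X = rng.standard_normal((n, d)) / np.sqrt(d)   # rows ~ N(0, I/d)
y = np.tanh(np.sqrt(d) * (X @ w_star))         # single-index target

W = rng.standard_normal((p, d)) / np.sqrt(d)   # first-layer weights
a = rng.standard_normal(p) / np.sqrt(p)        # second layer, kept frozen

def overlap(W):
    """Mean absolute alignment of the rows of W with the hidden direction."""
    return np.abs(W @ w_star).mean()

print("overlap before:", overlap(W))

# One full-batch gradient step on W under squared loss.
acts = X @ W.T                      # pre-activations, shape (n, p)
pred = np.maximum(acts, 0.0) @ a    # ReLU network output, shape (n,)
resid = pred - y
grad_W = ((resid[:, None] * (acts > 0) * a).T @ X) / n
W = W - float(p) * grad_W           # large step so weights move macroscopically

print("overlap after: ", overlap(W))  # rows of W pick up a w* component
```

The point of the sketch is the scaling, not the numbers: with a batch of order d^2 and a sufficiently large step, the first-layer weights acquire a macroscopic component along the target direction after a single update, which is the feature-learning effect the abstract describes.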

    Guided implant surgery and sinus lift in severely resorbed maxillae: A retrospective clinical study with up to 10 years of follow-up

    Get PDF
    Objectives: In the posterior maxilla, due to the presence of the maxillary sinus, a residual bone height lower than 3 mm is a critical factor that can affect implant stability and survival. The use of guided surgery may facilitate the surgical procedures and implant insertion in severely resorbed maxillae. Moreover, it may have beneficial effects on the long-term survival and success of implant-supported restorations. This study aimed to evaluate implant-supported restorations on severely resorbed maxillae (<3 mm) after sinus lift with collagenated xenograft and guided surgery. Methods: Forty-three patients in need of implant rehabilitation and with a residual bone height between 1 and 3 mm were recruited. Surgical and prosthetic aspects were planned following a digital approach with the use of Realguide 5.0 (3diemme, Varese, Italy). A lateral window sinus lift was performed, and implants were placed simultaneously with the augmentation procedure using a tooth-supported pilot-drill surgical template. A pre-hydrated collagenated porcine bone matrix was adopted as the regenerative material. Computer-aided design/computer-aided manufacturing (CAD/CAM) restorations were delivered after six months of healing. Milled titanium chamfer abutments with CAD/CAM crowns were used. Bone height at the implant site level was measured using image analysis software applied to the pre- and post-surgical radiographs and at follow-up. Biological and technical complications were recorded over the entire follow-up period. Results: Fifty-four sinuses were treated. After a mean follow-up time of 5.11 years (SD: 2.47), no implants were lost or showed signs of disease. The mean pristine bone height was 2.07 mm (SD: 0.75). At the final evaluation, the augmented sinus height was 12.83 mm (SD: 1.23). Two cases experienced minor perforation of the membrane, while five patients developed minimal post-operative complications, completely resolved with pharmacologic therapy. No mid-term biological complications were experienced by the patients. No cases of peri-implant mucositis or peri-implantitis occurred during the whole follow-up period. Four patients (7.4%) experienced unscrewing of the prosthesis. Conclusions: The present study showed the mid-term efficacy of digital planning and guided surgery in restoring the severely resorbed posterior maxilla with dental implants.